Deep Multimodal Guidance for Medical Image Classification
Abstract
Medical imaging is a cornerstone of therapy and diagnosis in modern medicine. However, the choice of imaging modality for a particular theranostic task typically involves trade-offs between the feasibility of using a particular modality (e.g., short wait times, low cost, fast acquisition, reduced radiation/invasiveness) and its expected performance on a clinical task (e.g., diagnostic accuracy, efficacy of treatment planning and guidance). In this work, we aim to apply the knowledge learned from the less feasible but better-performing (superior) modality to guide the utilization of the more-feasible yet under-performing (inferior) modality and steer it towards improved performance. We focus on the application of deep learning for image-based diagnosis. We develop a light-weight guidance model that leverages the latent representation of the superior modality when training a model that consumes only the inferior modality. We examine the advantages of our method in the context of two applications: multi-task skin lesion classification from clinical and dermoscopic images, and brain tumor classification from multi-sequence magnetic resonance imaging (MRI) and histopathology images. For both these scenarios, we show a boost in the performance of the inferior modality without requiring the superior modality. Furthermore, in the case of brain tumor classification, our method outperforms the model trained on the superior modality while producing comparable results to a model that uses both modalities during inference. We make our code and models available at: https://github.com/mayurmallya/DeepGuide

Keywords: Deep learning, Multimodal learning, Classification, Student-teacher learning, Knowledge distillation, Skin lesions, Brain tumors
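The guidance idea described in the abstract — pulling the inferior-modality model's latent representation toward that of the superior modality while training a classifier — can be sketched as a combined objective. The following is a hypothetical minimal NumPy illustration of such a student-teacher latent-matching loss, not the authors' implementation; all names, shapes, and the linear "encoder" are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear-plus-tanh encoder standing in for a deep network's latent head."""
    return np.tanh(W @ x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def guided_loss(x_inferior, z_superior, y, W_student, W_cls, lam=0.5):
    """Classification loss on the inferior modality plus an L2 term pulling
    the student's latent toward the (frozen) superior-modality latent,
    in the spirit of student-teacher knowledge distillation."""
    z_inf = encode(x_inferior, W_student)
    probs = softmax(W_cls @ z_inf)
    ce = -np.log(probs[y] + 1e-12)            # cross-entropy on the label
    guide = np.sum((z_inf - z_superior) ** 2)  # latent-matching penalty
    return ce + lam * guide

# Toy shapes: 16-dim inferior-modality input, 8-dim latent, 3 classes.
x_inf = rng.normal(size=16)
z_sup = rng.normal(size=8)           # latent from the superior-modality teacher
W_s = rng.normal(size=(8, 16)) * 0.1
W_c = rng.normal(size=(3, 8)) * 0.1

loss = guided_loss(x_inf, z_sup, y=1, W_student=W_s, W_cls=W_c)
print(float(loss) > 0)
```

At inference time only the student branch (inferior modality) is evaluated, which is what lets the method avoid acquiring the superior modality for new patients.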
Similar resources
Multimodal medical image fusion based on Yager’s intuitionistic fuzzy sets
The objective of image fusion for medical images is to combine multiple images obtained from various sources into a single image suitable for better diagnosis. Most state-of-the-art image fusion techniques are based on non-fuzzy sets, and the fused image so obtained lacks complementary information. Intuitionistic fuzzy sets (IFS) are determined to be more suitable for civilian, and medi...
Deep Features for Multimodal Emotion Classification
Understanding human emotion when perceiving audio-visual content is an exciting and important research avenue. Thus, there have been emerging attempts to predict the emotion elicited by video clips or movies recently. While most existing approaches focus either on single modality, i.e., only audio or visual data is exploited, or build on a multimodal scheme with late fusion, we propose a multim...
Deep Similarity Learning for Multimodal Medical Images
An effective similarity measure for multi-modal images is crucial for medical image fusion in many clinical applications. The underlying correlation across modalities is usually too complex to be modelled by intensity-based statistical metrics. Therefore, approaches of learning a similarity metric have been proposed in recent years. In this work, we propose a novel deep similarity learning method th...
Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning
Domain adaptation is a powerful technique when a large amount of labeled data with similar attributes is available in different domains. In real-world applications, there is a huge amount of data, but most of it is unlabeled. Domain adaptation is effective in image classification, where obtaining adequate labeled data is expensive and time-consuming. We propose a novel method named DALRRL, which consists of deep ...
Three-Dimensional Multimodal Image-Guidance for Neurosurgery - Medical Imaging, IEEE Transactions on
We address the use of multimodality imaging as an aid to the planning and guidance of neurosurgical procedures, and discuss the integration of anatomical (CT and MRI), vascular (DSA), and functional (PET) data for presentation to the surgeon during surgery. Our workstation is an enhancement of a commercially available system, and in addition to the guidance offered via a hand-held probe, it inc...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2022
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-031-16449-1_29